Search results for "Markov property"
Statistics of transitions for Markov chains with periodic forcing
2013
The influence of a time-periodic forcing on stochastic processes is essentially revealed in the large-time behaviour of their paths. The statistics of transitions in a simple Markov chain model make it possible to quantify this influence. In particular, the first Floquet multiplier of the associated generating function can be explicitly computed and related to the equilibrium probability measure of an associated process in higher dimension. An application to stochastic resonance is presented.
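A minimal sketch of the idea (not the paper's model): a two-state Markov chain whose switching probability is modulated by a periodic forcing of period T. The transition count over many periods recovers the mean switching probability, while the within-period statistics carry the signature of the forcing. All parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 20  # assumed forcing period

def p_switch(n, base=0.1, amp=0.08):
    """Time-periodic switching probability: base rate plus sinusoidal forcing."""
    return base + amp * np.sin(2 * np.pi * n / T)

state, transitions = 0, 0
N = 200_000  # an integer number of periods
for n in range(N):
    if rng.random() < p_switch(n):
        state = 1 - state
        transitions += 1

# Over full periods the sinusoid averages out, so the empirical
# transition rate is close to the base switching probability 0.1.
rate = transitions / N
assert 0.05 < rate < 0.15
```

Resolving the transition counts by phase n mod T, rather than aggregating them as above, is what exposes the periodic structure the paper quantifies.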
ℓ1-Penalized Methods in High-Dimensional Gaussian Markov Random Fields
2016
In the last 20 years, we have witnessed the dramatic development of new data acquisition technologies that allow massive amounts of data to be collected at relatively low cost. This new feature led Donoho to call the twenty-first century the century of data. A major characteristic of these modern data sets is that the number of measured variables is larger than the sample size; the term high-dimensional data analysis refers to the statistical methods developed to make inference with this new kind of data. This chapter is devoted to the study of some of the most recent ℓ1-penalized methods proposed in the literature to make sparse inference in a Gaussian Markov random field (GMRF) defined …
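The object these ℓ1-penalized methods target is the sparsity pattern of the precision (inverse covariance) matrix: in a GMRF, a zero off-diagonal entry encodes a missing edge, i.e., conditional independence of the two variables given all the others. A small numerical sketch with an assumed chain-structured GMRF X1 – X2 – X3 (illustrative numbers, not from the chapter):

```python
import numpy as np

# Tridiagonal precision matrix K of a chain GMRF X1 - X2 - X3.
# K[0, 2] == 0 encodes that X1 and X3 are conditionally
# independent given X2 -- the pairwise Markov property.
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

Sigma = np.linalg.inv(K)  # the covariance matrix is generally dense

# Marginally X1 and X3 are correlated ...
assert abs(Sigma[0, 2]) > 1e-6
# ... but the corresponding precision entry is zero: no edge in the graph.
assert abs(np.linalg.inv(Sigma)[0, 2]) < 1e-9
```

ℓ1-penalized estimators (e.g., the graphical lasso) recover exactly this zero pattern from a sample covariance, which is what makes sparse inference feasible when variables outnumber observations.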
A New Tool for the Modeling of AI and Machine Learning Applications: Random Walk-Jump Processes
2011
Published version of an article from the book Hybrid Artificial Intelligent Systems, Lecture Notes in Computer Science. The original publication is available at www.springerlink.com, http://dx.doi.org/10.1007/978-3-642-21219-2_2 There are numerous applications in Artificial Intelligence (AI) and Machine Learning (ML) where the criteria for decisions are based on testing procedures. The most common tools used in such random phenomena involve Random Walks (RWs). The theory of RWs and its applications have gained increasing research interest since the start of the last century [1]. In this context, we note that a RW is usually defined as a trajectory involving a series of successive ran…
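The plain RW the chapter builds on can be sketched in a few lines (an illustrative simulation, not the paper's walk-jump model, which additionally allows occasional jumps): a trajectory of successive ±1 steps, whose next position depends only on the current one.

```python
import numpy as np

rng = np.random.default_rng(1)

# 2000 independent simple random walks of n = 400 steps each;
# each step is +1 or -1 with equal probability.
n_walks, n_steps = 2000, 400
endpoints = rng.choice([-1, 1], size=(n_walks, n_steps)).sum(axis=1)

# Classical RW statistics: E[S_n] = 0 and Var[S_n] = n,
# so the endpoint standard deviation is sqrt(400) = 20.
assert abs(endpoints.mean()) < 2.0
assert abs(endpoints.std() - 20.0) < 2.0
```

In the testing-procedure setting the abstract mentions, it is functionals of such trajectories (e.g., hitting times or endpoint statistics) that drive the decision criteria.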
Convergence of Markov Chains
2020
We consider a Markov chain X with invariant distribution π and investigate conditions under which the distribution of X_n converges to π as n→∞. Essentially it is necessary and sufficient that the state space of the chain cannot be decomposed into subspaces that the chain does not leave, or that are visited by the chain periodically; e.g., only for odd n or only for even n.
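Both halves of the criterion can be checked numerically. Below, a hypothetical irreducible, aperiodic 3-state chain converges to its invariant distribution, while a deterministic 2-state swap (the periodic decomposition mentioned in the abstract) keeps alternating between odd-n and even-n distributions:

```python
import numpy as np

# Irreducible, aperiodic chain: the n-step distribution converges to pi.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Invariant distribution pi: the left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

mu = np.array([1.0, 0.0, 0.0])  # start in state 0
for _ in range(100):
    mu = mu @ P
assert np.allclose(mu, pi, atol=1e-8)

# Periodic counterexample: the chain visits {0} only for even n and {1}
# only for odd n, so the distribution never converges.
Q = np.array([[0.0, 1.0],
              [1.0, 0.0]])
nu = np.array([1.0, 0.0])
odd = nu @ Q     # distribution at n = 1
even = odd @ Q   # distribution at n = 2
assert not np.allclose(odd, even)
```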
Income distribution dynamics: monotone Markov chains make light work
1995
This paper considers some aspects of the dynamics of income distributions by employing a simple Markov chain model of income mobility. The main motivation of the paper is to introduce the techniques of “monotone” Markov chains to this field. The transition matrix of a discrete Markov chain is called monotone if each row stochastically dominates the row above it. It will be shown that by embedding the dynamics of the income distribution in a monotone Markov chain, a number of interesting results may be obtained in a straightforward and intuitive fashion.
Pairwise Markov properties for regression graphs
2016
With a sequence of regressions, one may generate joint probability distributions. One starts with a joint, marginal distribution of context variables having possibly a concentration graph structure and continues with an ordered sequence of conditional distributions, named regressions in joint responses. The involved random variables may be discrete, continuous or of both types. Such a generating process specifies for each response a conditioning set that contains just its regressor variables, and it leads to at least one valid ordering of all nodes in the corresponding regression graph that has three types of edge: one for undirected dependences among context variables, another for undirect…